A group of new astronauts join NASA under the Artemis program and could be the first to step on Mars

Daily Mail - Science & tech

It has been more than two years in the making, but 13 new astronauts have finally joined NASA under the mission that will bring the first woman to the Moon, and some may be the first humans to step on Mars. The candidates, who have been training since 2017, participated in the first public graduation ceremony for astronauts on Friday at the American space agency's Johnson Space Center in Houston. The group includes six women and seven men, two of whom are Canadian Space Agency (CSA) astronauts, and all were chosen from a record-setting pool of more than 18,000 applicants. During the ceremony, each of the bright-eyed graduates was given a silver pin that symbolizes the Mercury 7, NASA's first astronaut group, selected in 1959. They will be awarded a gold pin once they complete their first spaceflights.


This Colorado hospital is using Qventus' AI to improve operations - MedCity News

#artificialintelligence

Wheat Ridge, Colorado-based Lutheran Medical Center, which is part of Broomfield, Colorado-based SCL Health, wanted to improve its operations. "We determined a few years ago that for a hospital like ours that has a very challenging payer mix, … running an extremely cost-efficient operation was necessary for stability," said Lutheran Medical Center president and CEO Grant Wicklund in a phone interview. "One of the ways we identified we could become even more cost-efficient was to be absolutely world-class at having the appropriate length of stay." Noomi Hirsch, the medical center's vice president of operations, took the lead on the effort. In a phone interview, she explained that the organization was able to address the low-hanging-fruit areas first, but eventually began looking at technology options to tackle the problem.


Emergence of Grounded Compositional Language in Multi-Agent Populations

Mordatch, Igor, Abbeel, Pieter

arXiv.org Artificial Intelligence

By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
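The paper itself trains agents with gradient-based methods in a shared 2D environment; as a much simpler illustration of the core idea, that a discrete code can emerge purely from a shared task reward, here is a toy Lewis-style signaling game. The tabular agents, epsilon-greedy rule, and reward values are all invented for this sketch and are not the authors' method.

```python
import random

random.seed(0)

N_CONCEPTS = N_SYMBOLS = 3
EPSILON = 0.1  # exploration rate

# Tabular preference scores: the speaker maps concept -> symbol, the
# listener maps symbol -> concept. Both start indifferent (all zeros).
speaker = [[0.0] * N_SYMBOLS for _ in range(N_CONCEPTS)]
listener = [[0.0] * N_CONCEPTS for _ in range(N_SYMBOLS)]

def choose(prefs):
    """Epsilon-greedy choice over a row of preference scores."""
    if random.random() < EPSILON:
        return random.randrange(len(prefs))
    return max(range(len(prefs)), key=lambda i: prefs[i])

for _ in range(5000):
    concept = random.randrange(N_CONCEPTS)  # speaker's private goal
    symbol = choose(speaker[concept])       # discrete "utterance"
    guess = choose(listener[symbol])        # listener interprets it
    reward = 1.0 if guess == concept else -0.1
    speaker[concept][symbol] += reward      # both reinforce the shared outcome
    listener[symbol][guess] += reward

def communicate(concept):
    """Greedy round trip: concept -> symbol -> interpreted concept."""
    symbol = max(range(N_SYMBOLS), key=lambda s: speaker[concept][s])
    return max(range(N_CONCEPTS), key=lambda c: listener[symbol][c])

accuracy = sum(communicate(c) == c for c in range(N_CONCEPTS)) / N_CONCEPTS
print(f"round-trip accuracy: {accuracy:.2f}")
```

Because neither agent sees the other's table, any consistent concept-to-symbol mapping that emerges is a (tiny) shared convention, which is the phenomenon the abstract describes at much larger scale.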


Emergence of Grounded Compositional Language in Multi-Agent Populations

Mordatch, Igor (OpenAI) | Abbeel, Pieter (UC Berkeley)

AAAI Conferences

By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.


The Pragmatics of Indirect Commands in Collaborative Discourse

Lamm, Matthew, Eric, Mihail

arXiv.org Artificial Intelligence

Today's artificial assistants are typically prompted to perform tasks through direct, imperative commands such as \emph{Set a timer} or \emph{Pick up the box}. However, to progress toward more natural exchanges between humans and these assistants, it is important to understand the way non-imperative utterances can indirectly elicit action of an addressee. In this paper, we investigate command types in the setting of a grounded, collaborative game. We focus on a less understood family of utterances for eliciting agent action, locatives like \emph{The chair is in the other room}, and demonstrate how these utterances indirectly command in specific game state contexts. Our work shows that models with domain-specific grounding can effectively realize the pragmatic reasoning that is necessary for more robust natural language interaction.


Using AI to Teach AI: Lessons from an Online AI Class

Goel, Ashok K. (Georgia Institute of Technology) | Joyner, David A. (Udacity and Georgia Institute of Technology)

AI Magazine

In fall 2014, we launched a foundational course in artificial intelligence (CS7637: Knowledge-Based AI) as part of the Georgia Institute of Technology's Online Master of Science in Computer Science program. We incorporated principles and practices from the cognitive and learning sciences into the development of the online AI course. We also integrated AI techniques into the instruction of the course, including embedding 100 highly focused intelligent tutoring agents in the video lessons. By now, more than 2000 students have taken the course. Evaluations have indicated that OMSCS students enjoy the course compared to traditional courses, and more importantly, that online students have matched residential students' performance on the same assessments. In this article, we present the design, delivery, and evaluation of the course, focusing on the use of AI for teaching AI. We also discuss lessons we learned for scaling the teaching and learning of AI.


Cognition as a Service: An Industry Perspective

Spohrer, Jim (IBM Research, Almaden) | Banavar, Guruduth (IBM Research)

AI Magazine

Recent advances in cognitive computing componentry combined with other factors are leading to commercially viable cognitive systems. From chips to smart phones to public and private clouds, industrial strength “cognition as a service” is beginning to appear at all scales in business and society. Furthermore, in the age of zettabytes on the way to yottabytes, the designers, engineers, and managers of future smart systems will depend on cognition as a service. Cognition as a service can help unlock the mysteries of big data and ultimately boost the creativity and productivity of professionals and their teams, the productive output of industries and organizations, as well as the GDP (gross domestic product) of regions and nations. In this and the next decade, cognition as a service will allow us to re-imagine work practices, augmenting and scaling expertise to transform professions, industries, and regions.


Using Analogy to Cluster Hand-Drawn Sketches for Sketch-Based Educational Software

Chang, Maria D. (Northwestern University) | Forbus, Kenneth D. (Northwestern University)

AI Magazine

One of the major challenges to building intelligent educational software is determining what kinds of feedback to give learners. Useful feedback makes use of models of domain-specific knowledge, especially models that are commonly held by potential students. To empirically determine what these models are, student data can be clustered to reveal common misconceptions or common problem-solving strategies. This article describes how analogical retrieval and generalization can be used to cluster automatically analyzed hand-drawn sketches incorporating both spatial and conceptual information. We use this approach to cluster a corpus of hand-drawn student sketches to discover common answers. Common answer clusters can be used for the design of targeted feedback and for assessment.
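The article's clustering is driven by analogical retrieval and generalization over structured sketch representations; as a greatly simplified stand-in, the sketch below clusters "sketches" represented as sets of facts by overlap with each cluster's shared facts. The features, Jaccard similarity, and threshold are invented for illustration and do not reflect the authors' analogical machinery.

```python
def jaccard(a, b):
    """Set overlap in [0, 1]."""
    return len(a & b) / len(a | b)

def cluster(sketches, threshold=0.5):
    """Incrementally assign each sketch to the first cluster whose shared
    facts (the intersection of its members) it overlaps enough with;
    otherwise the sketch seeds a new cluster."""
    clusters = []
    for s in sketches:
        for c in clusters:
            proto = set.intersection(*c)  # facts common to the cluster
            if proto and jaccard(s, proto) >= threshold:
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

# Toy student sketches as fact sets (hypothetical features).
sketches = [
    {"arrow(up)", "label(force)"},
    {"arrow(up)", "label(force)", "box"},
    {"arrow(down)", "label(gravity)"},
]
groups = cluster(sketches)
print(len(groups), "clusters")
```

Under these toy features, the two "upward force" sketches group together and the "gravity" sketch stands alone, mirroring how common-answer clusters could surface shared strategies or misconceptions.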


Approaching the Symbol Grounding Problem with Probabilistic Graphical Models

Tellex, Stefanie (Massachusetts Institute of Technology) | Kollar, Thomas (Massachusetts Institute of Technology) | Dickerson, Steven (Massachusetts Institute of Technology) | Walter, Matthew R. (Massachusetts Institute of Technology) | Banerjee, Ashis Gopal (Massachusetts Institute of Technology) | Teller, Seth (Massachusetts Institute of Technology) | Roy, Nicholas (Massachusetts Institute of Technology)

AI Magazine

In order for robots to engage in dialog with human teammates, they must have the ability to map between words in the language and aspects of the external world. A solution to this symbol grounding problem (Harnad, 1990) would enable a robot to interpret commands such as “Drive over to receiving and pick up the tire pallet.” In this article we describe several of our results that use probabilistic inference to address the symbol grounding problem. Our specific approach is to develop models that factor according to the linguistic structure of a command. We first describe an early result, a generative model that factors according to the sequential structure of language, and then discuss our new framework, generalized grounding graphs (G3). The G3 framework dynamically instantiates a probabilistic graphical model for a natural language input, enabling a mapping between words in language and concrete objects, places, paths, and events in the external world. We report on corpus-based experiments where the robot is able to learn and use word meanings in three real-world tasks: indoor navigation, spatial language video retrieval, and mobile manipulation.
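To give a flavor of "factoring according to linguistic structure," here is a toy version of the idea: one grounding variable per phrase of the command, per-phrase compatibility factors, and a maximum-product search over joint assignments. The lexicon, scores, and world objects are all invented; the real G3 framework learns its factors from a corpus and includes factors linking variables, which this sketch omits (so the argmax here factorizes trivially).

```python
import itertools

OBJECTS = ["tire_pallet", "box", "truck"]
PLACES = ["receiving", "loading_dock"]

# Hypothetical compatibility scores phi(word, grounding) in [0, 1].
PHI = {
    ("tire pallet", "tire_pallet"): 0.9,
    ("tire pallet", "box"): 0.2,
    ("tire pallet", "truck"): 0.1,
    ("receiving", "receiving"): 0.95,
    ("receiving", "loading_dock"): 0.1,
}

def score(word, grounding):
    """Factor value for a word/grounding pair (small default if unseen)."""
    return PHI.get((word, grounding), 0.05)

def ground(command_phrases):
    """Each (word, kind) phrase gets a grounding variable over the matching
    domain; the joint score is the product of per-phrase factors, and we
    return the maximum-product assignment."""
    domains = [OBJECTS if kind == "object" else PLACES
               for _, kind in command_phrases]
    best, best_score = None, -1.0
    for assignment in itertools.product(*domains):
        s = 1.0
        for (word, _), g in zip(command_phrases, assignment):
            s *= score(word, g)
        if s > best_score:
            best, best_score = assignment, s
    return dict(zip((w for w, _ in command_phrases), best))

groundings = ground([("receiving", "place"), ("tire pallet", "object")])
print(groundings)
```

For the example command above, the search maps "receiving" to the `receiving` place and "tire pallet" to the `tire_pallet` object, the kind of word-to-world mapping the abstract describes.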


An Application of Transfer to American Football: From Observation of Raw Video to Control in a Simulated Environment

Stracuzzi, David J. (Sandia National Laboratories) | Fern, Alan (Oregon State University) | Ali, Kamal (Stanford University) | Hess, Robin (Oregon State University) | Pinto, Jervis (Oregon State University) | Li, Nan (Carnegie Mellon University) | Konik, Tolga (Stanford University) | Shapiro, Daniel G. (Institute for the Study of Learning and Expertise)

AI Magazine

Automatic transfer of learned knowledge from one task or domain to another offers great potential to simplify and expedite the construction and deployment of intelligent systems. In practice, however, there are many barriers to achieving this goal. In this article, we present a prototype system for the real-world context of transferring knowledge of American football from video observation to control in a game simulator. We trace an example play from the raw video through execution and adaptation in the simulator, highlighting the system's component algorithms along with issues of complexity, generality, and scale. We then conclude with a discussion of the implications of this work for other applications, along with several possible improvements.